In the mathematical discipline of numerical linear algebra, a matrix splitting is an expression which represents a given matrix as a sum or difference of matrices. Many iterative methods (for example, for systems of differential equations) depend upon the direct solution of matrix equations involving matrices more general than tridiagonal matrices. These matrix equations can often be solved directly and efficiently when written as a matrix splitting. The technique was devised by Richard S. Varga in 1960.

==Regular splittings==

We seek to solve the matrix equation

: Ax = k,    (1)

where A is a given ''n'' × ''n'' non-singular matrix, and k is a given column vector with ''n'' components. We split the matrix A into

: A = B − C,    (2)

where B and C are ''n'' × ''n'' matrices. If, for an arbitrary ''n'' × ''n'' matrix M, M has nonnegative entries, we write M ≥ 0. If M has only positive entries, we write M > 0. Similarly, if the matrix M1 − M2 has nonnegative entries, we write M1 ≥ M2.

Definition: A = B − C is a regular splitting of A if and only if B−1 ≥ 0 and C ≥ 0.

We assume that matrix equations of the form

: Bx = g,    (3)

where g is a given column vector, can be solved directly for the vector x. If (2) represents a regular splitting of A, then the iterative method

: Bx(m+1) = Cx(m) + k,    m = 0, 1, 2, ...,    (4)

where x(0) is an arbitrary vector, can be carried out. Equivalently, we write (4) in the form

: x(m+1) = B−1Cx(m) + B−1k,    m = 0, 1, 2, ...    (5)

The matrix D = B−1C has nonnegative entries if (2) represents a regular splitting of A.

It can be shown that if A−1 > 0, then ρ(D) < 1, where ρ(D) represents the spectral radius of D, and thus D is a convergent matrix. As a consequence, the iterative method (5) is necessarily convergent.

If, in addition, the splitting (2) is chosen so that the matrix B is a diagonal matrix (with the diagonal entries all non-zero, since B must be invertible), then B can be inverted in linear time (see Time complexity).
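As a concrete sketch of the iteration (4)–(5), the code below takes B = diag(A) (a diagonal B, as in the closing remark above, which corresponds to the classical Jacobi splitting); the particular matrix A and vector k are illustrative choices, not from the source, picked so that A = B − C is a regular splitting and the iteration converges:

```python
import numpy as np

def iterate_splitting(A, k, num_iters=100):
    """Approximate the solution of A x = k via the splitting A = B - C,
    with B = diag(A), iterating x(m+1) = B^{-1} (C x(m) + k)."""
    B_diag = np.diag(A)            # B is diagonal, so B^{-1} is cheap
    C = np.diag(B_diag) - A        # C = B - A, so A = B - C
    x = np.zeros_like(k, dtype=float)   # x(0): an arbitrary starting vector
    for _ in range(num_iters):
        x = (C @ x + k) / B_diag   # x(m+1) = B^{-1} C x(m) + B^{-1} k
    return x

# Illustrative example: for this A, B^{-1} >= 0 and C >= 0,
# so A = B - C is a regular splitting and the iteration converges.
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
k = np.array([3.0, 2.0, 3.0])
x = iterate_splitting(A, k)
print(np.allclose(A @ x, k))
```

Here D = B−1C has spectral radius well below 1, so the iterates approach the exact solution geometrically; after 100 sweeps the residual is negligible.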